22 research outputs found

    Real-time Tracking of Guidewire Robot Tips using Deep Convolutional Neural Networks on Successive Localized Frames

    Research is under way to make cardiac interventions safer using thin micro-guidewires and catheter robots. To steer the robot to a desired position and pose, the robot tip must be tracked accurately in real time, but tracking and precisely delineating the thin, small tip is challenging. To address this problem, this paper proposes a novel image-analysis-based tracking method using deep convolutional neural networks (CNNs). The proposed tracker consists of two parts: (1) a detection network that roughly locates the tip position and (2) a segmentation network that accurately delineates the tip near that position. To learn a robust real-time tracker, we extract small image patches containing the tip in successive frames and learn informative spatial and motion features for the segmentation network. During inference, the tip bounding box is first estimated in the initial frame via the detection network; thereafter, tip delineation is performed consecutively through the segmentation network in the following frames. The proposed method delineates the tip accurately in real time and automatically restarts tracking via the detection network when tracking fails in challenging frames. Experimental results show that the proposed method achieves better tracking accuracy than existing methods at a real-time speed of 19 ms per frame.
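
    A minimal sketch of the described detect-then-segment loop, assuming hypothetical DetectionNet/SegmentationNet modules in PyTorch; the patch size, the failure test, and the use of single-frame patches (the paper feeds successive frames to learn motion features) are illustrative simplifications, not the authors' released code.

```python
import torch

PATCH = 128  # assumed side length of the localized patch around the tip

def crop_patch(frame, center, size=PATCH):
    """Crop a size x size patch around (x, y), clamped to the frame borders."""
    x, y = center
    h, w = frame.shape[-2:]
    x0 = max(0, min(int(x) - size // 2, w - size))
    y0 = max(0, min(int(y) - size // 2, h - size))
    return frame[..., y0:y0 + size, x0:x0 + size], (x0, y0)

@torch.no_grad()
def track(frames, det_net, seg_net, min_area=10):
    """Detect the tip once, then delineate it patch-by-patch; re-detect on failure."""
    center = None
    for frame in frames:  # frame: (1, 1, H, W) fluoroscopy tensor
        if center is None:
            x0, y0, x1, y1 = det_net(frame)  # assumed to return a tip bounding box
            center = ((x0 + x1) / 2, (y0 + y1) / 2)
        patch, offset = crop_patch(frame, center)
        mask = seg_net(patch).sigmoid() > 0.5  # binary tip mask on the patch
        if mask.sum() < min_area:  # tracking failed: restart via detection
            center = None
            continue
        ys, xs = torch.nonzero(mask[0, 0], as_tuple=True)
        center = (offset[0] + xs.float().mean().item(),
                  offset[1] + ys.float().mean().item())
        yield center
```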

    Feature Re-calibration based Multiple Instance Learning for Whole Slide Image Classification

    Whole slide image (WSI) classification is a fundamental task for the diagnosis and treatment of diseases, but curating accurate labels is time-consuming and limits the application of fully-supervised methods. To address this, multiple instance learning (MIL) is a popular approach that poses classification as a weakly supervised learning task with slide-level labels only. While current MIL methods apply variants of the attention mechanism to re-weight instance features with stronger models, scant attention is paid to the properties of the data distribution. In this work, we propose to re-calibrate the distribution of a WSI bag (instances) using the statistics of the max-instance (critical) feature. We assume that in binary MIL, positive bags have larger feature magnitudes than negative ones, so we can push the model to maximize the discrepancy between bags with a metric feature loss that models positive bags as out-of-distribution. To achieve this, unlike existing MIL methods that train with a single bag per batch, we propose balanced-batch sampling so the feature loss sees positive and negative bags simultaneously. Further, we employ a position encoding module (PEM) to model spatial/morphological information and perform pooling by multi-head self-attention (PMSA) with a Transformer encoder. Experimental results on existing benchmark datasets show our approach is effective and improves over state-of-the-art MIL methods. Comment: MICCAI 2022
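
    The balanced-batch idea can be sketched as follows: each training step pairs one positive and one negative bag so a metric feature loss can compare their critical features directly. This is a hedged PyTorch sketch, not the authors' code; selecting the critical instance by feature norm and the margin value are assumptions.

```python
import random
import torch
import torch.nn.functional as F

def balanced_pairs(pos_bags, neg_bags):
    """Yield one positive and one negative bag per step, reshuffled each epoch."""
    random.shuffle(pos_bags)
    random.shuffle(neg_bags)
    yield from zip(pos_bags, neg_bags)

def max_instance_feature_loss(pos_feats, neg_feats, margin=1.0):
    """Margin loss pushing the norm of the positive bag's critical (max) instance
    feature above the negative bag's, treating positives as out-of-distribution.
    Each feats argument: (n_instances, D) embeddings of one bag."""
    pos_crit = pos_feats[pos_feats.norm(dim=1).argmax()]
    neg_crit = neg_feats[neg_feats.norm(dim=1).argmax()]
    return F.relu(margin - pos_crit.norm() + neg_crit.norm())
```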

    Enhancement of Perivascular Spaces Using Densely Connected Deep Convolutional Neural Network

    Perivascular spaces (PVS) in the human brain are related to various brain diseases, but they are difficult to quantify due to their thin and blurry appearance. In this paper, we introduce a deep learning-based method that enhances a magnetic resonance (MR) image to better visualize the PVS. To accurately predict the enhanced image, we propose a very deep 3D convolutional neural network built from densely connected blocks with skip connections. The proposed network can exploit rich contextual information from low-level to high-level features and effectively alleviates the vanishing-gradient problem caused by the deep layers. The method is evaluated on 17 7T MR images using twofold cross-validation. The experiments show that our network enhances the PVS much more effectively than previous PVS enhancement methods.
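
    A minimal PyTorch sketch of a densely connected 3D block with a long skip connection, in the spirit of the described network; the depth, growth rate, and channel counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DenseBlock3D(nn.Module):
    def __init__(self, in_ch, growth=16, layers=4):
        super().__init__()
        self.convs = nn.ModuleList()
        ch = in_ch
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv3d(ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True)))
            ch += growth  # each layer sees all previous feature maps
        self.out_ch = ch

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))  # dense connectivity
        return torch.cat(feats, dim=1)

class PVSEnhancer(nn.Module):
    """The residual (long skip) connection lets the network learn only the
    enhancement on top of the input volume."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv3d(1, 32, kernel_size=3, padding=1)
        self.dense = DenseBlock3D(32)
        self.head = nn.Conv3d(self.dense.out_ch, 1, kernel_size=3, padding=1)

    def forward(self, x):  # x: (N, 1, D, H, W) MR volume
        return x + self.head(self.dense(self.stem(x)))
```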

    Synthesize and Segment: Towards Improved Catheter Segmentation via Adversarial Augmentation

    Automatic catheter and guidewire segmentation plays an important role in robot-assisted interventions guided by fluoroscopy. Existing learning-based methods for segmentation or tracking are often limited by the scarcity of annotated samples and the difficulty of data collection. For deep learning-based methods, the demand for large amounts of labeled data further impedes successful application. To address this, we propose a synthesize-and-segment approach into which existing segmentation networks can be plugged. We show that an adversarially learned image-to-image translation network can synthesize catheters in X-ray fluoroscopy, enabling data augmentation that alleviates the low-data regime. To make the synthesized images realistic, we train the translation network with a perceptual loss coupled with similarity constraints. Existing segmentation networks then learn accurate localization of catheters in a semi-supervised setting with the generated images. Empirical results on collected medical datasets show the value of our approach, with significant improvements over existing translation baseline methods. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
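
    A hedged sketch of the perceptual-loss ingredient, assuming a torchvision VGG16 as the feature extractor; the layer choice and L1 comparison are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

class PerceptualLoss(torch.nn.Module):
    def __init__(self, layer=16):  # features[:16] ends at relu3_3, an assumed choice
        super().__init__()
        self.feats = vgg16(weights="DEFAULT").features[:layer].eval()
        for p in self.feats.parameters():
            p.requires_grad_(False)  # frozen feature extractor

    def forward(self, fake, real):
        # single-channel fluoroscopy -> 3 channels expected by VGG;
        # ImageNet normalization is omitted here for brevity
        fake3, real3 = fake.repeat(1, 3, 1, 1), real.repeat(1, 3, 1, 1)
        return F.l1_loss(self.feats(fake3), self.feats(real3))
```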

    Quantitative Assessment of Chest CT Patterns in COVID-19 and Bacterial Pneumonia Patients: a Deep Learning Perspective

    Background: It is difficult to distinguish the subtle differences between computed tomography (CT) images of coronavirus disease 2019 (COVID-19) and bacterial pneumonia patients, which often leads to inaccurate diagnosis. Interpretable feature extraction techniques that describe the patient's condition are therefore desirable. Methods: This is a retrospective cohort study of 170 confirmed patients with COVID-19 or bacterial pneumonia acquired at Yeungnam University Hospital in Daegu, Korea. The lung and lesion regions were segmented to crop the lesions into 2D patches used to train a classifier that differentiates COVID-19 from bacterial pneumonia. The K-means algorithm clustered the deep features extracted by the trained model into 20 groups, and each lesion-patch cluster was described by a characteristic imaging term for comparison. For each CT image containing multiple lesions, a histogram of lesion types was constructed from the cluster assignments. Finally, a support vector machine classifier was trained on the histogram and radiomics features to distinguish the diseases and their severity. Results: The 20 clusters constructed from the 170 patients were reviewed based on common radiographic appearance types. Two clusters showed typical findings of COVID-19, and two other clusters showed typical findings of bacterial pneumonia. Notably, one cluster showed bilateral diffuse ground-glass opacities (GGOs) in the central and peripheral lungs and was considered a key factor for severity classification. The proposed method achieved an accuracy of 91.2% for classifying COVID-19 versus bacterial pneumonia and 95% for severity classification. The CT quantitative parameters represented by the values of cluster 8 correlated with existing laboratory data and clinical parameters. Conclusion: Deep chest CT analysis with the constructed lesion clusters revealed well-known COVID-19 CT manifestations comparable to manual CT analysis. The constructed histogram features improved accuracy for both disease and severity classification, correlated with laboratory data and clinical parameters, and can provide guidance for improved analysis and treatment of COVID-19. © 2021. The Korean Academy of Medical Sciences.
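
    The clustering-and-histogram pipeline can be sketched compactly with scikit-learn; the 20-cluster count follows the text, while the deep-feature inputs, SVM kernel, and other settings below are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

N_CLUSTERS = 20  # number of lesion-patch groups reported in the study

def build_histograms(patch_feats_per_ct, kmeans):
    """Map each CT scan (a set of lesion-patch features) to a normalized
    histogram over lesion-type clusters."""
    hists = []
    for feats in patch_feats_per_ct:  # feats: (n_patches, D) deep features
        labels = kmeans.predict(feats)
        h = np.bincount(labels, minlength=N_CLUSTERS).astype(float)
        hists.append(h / max(h.sum(), 1.0))
    return np.stack(hists)

def fit_pipeline(all_patch_feats, patch_feats_per_ct, radiomics, y):
    """all_patch_feats: (N, D) features of every lesion patch pooled over
    patients; radiomics: (n_ct, R) per-scan radiomics; y: per-scan labels."""
    kmeans = KMeans(n_clusters=N_CLUSTERS, random_state=0).fit(all_patch_feats)
    X = np.hstack([build_histograms(patch_feats_per_ct, kmeans), radiomics])
    clf = SVC(kernel="rbf").fit(X, y)  # assumed kernel; the paper's SVM settings are not given
    return kmeans, clf
```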

    Weakly Supervised Representation Learning for Histopathology Image Analysis

    Keywords: Computer aided diagnosis, Deep learning, Histopathology, Multiple instance learning, Weak supervision

    Table of contents:
    I. Introduction
        1 Background and Motivations
            1.1 Weakly Supervised Learning
            1.2 Multiple Instance Learning
            1.3 Histopathology Image Analysis
        2 Contributions and Outline
        3 Publications
            3.1 Excluded Research
    II. End-to-End Multiple Instance Learning with Center Embeddings
        1 Introduction
        2 Related Works
            2.1 Multiple Instance Learning for Histopathology
            2.2 Unsupervised Methods with Multiple Instance Learning
        3 Methodology
            3.1 Top-k Instance Selection
            3.2 Instance Level Learning
            3.3 Pyramidal Bag-level Learning
            3.4 Soft-Assignment based Inference
        4 Evaluation
            4.1 Dataset and Settings
            4.2 Quantitative Results
            4.3 Qualitative Results
        5 Conclusion
    III. Dual Multiple Instance Learning with Self-Supervision
        1 Introduction
        2 Related Works
            2.1 Deep Learning for COVID-19 Diagnosis
            2.2 COVID-19 Diagnosis under Weak Supervision
            2.3 Self-Supervised Learning
        3 Methodology
            3.1 Dual Attention based Learning
            3.2 Contrastive Multiple Instance Learning
        4 Evaluation
            4.1 Datasets and Settings
            4.2 Quantitative Results
            4.3 Qualitative Results
        5 Discussion
        6 Conclusion
    IV. Weakly Supervised Segmentation in Gigapixel Images
        1 Introduction
        2 Related Works
            2.1 Weakly Supervised Segmentation in Computer Vision
            2.2 Patch-based Weakly Supervised Segmentation for Histopathology
            2.3 Weakly Supervised Learning with Self-Training
        3 Methodology
            3.1 Contrastive Neural Compression
            3.2 Self-Supervised Neural CAM Refinement
            3.3 τ-MaskOut: Spatial Feature Masking as Regularization
            3.4 Pixel Correlation Module (PCM)
            3.5 Conditional Entropy Minimization between CAMs
        4 Evaluation
            4.1 Datasets
            4.2 Implementation Settings
            4.3 Quantitative Results
            4.4 Ablations on Learning Objectives
            4.5 Qualitative Results
            4.6 Ablations on Relative Tumor Sizes
        5 Conclusion
    V. Re-calibrating Feature Distributions in Histopathology
        1 Introduction
        2 Methodology
            2.1 Preliminaries: A Simple Baseline
            2.2 Feature Re-calibration & Max-Instance Selection
            2.3 Positional Encoding Module (PEM)
            2.4 MIL Pooling by Multi-head Self-Attention (PMSA)
        3 Evaluation
            3.1 Datasets and Settings
            3.2 Main Results
            3.3 Impact of Learning Objectives
        4 Conclusion
    VI. Concluding Remarks
        1 Conclusion
        2 Future Work
    VII. Acknowledgement
    References

    MULTIPLE INSTANCE LEARNING METHOD


    MULTIPLE INSTANCE LEARNING FOR HISTOPATHOLOGY CLASSIFICATION

    The present invention relates to a multiple instance learning method for histopathology classification, performed by at least one processor in a computing device or computing network. The method may comprise: an instance selection step of executing a feature extraction model (Fθ(·)) to transform the instances (pij) derived from the i-th slide into low-dimensional embeddings (gij), checking the positivity of each instance (pij) with a binary classifier, and classifying the instance-level probabilities of all bags to sample the top instances per slide for training; a training step of learning from the instances obtained in the instance selection step, performing instance-level learning and bag-level learning sequentially to obtain the final loss; and a soft-assignment-based inference step of assigning the bag-level embedding (zi) to learned centroids using a kernel that measures the similarity between two points.
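
    A hedged sketch of two of the claimed steps: top instance sampling per slide from instance-level probabilities, and soft assignment of the bag embedding to learned centroids. The claim does not fix the kernel or k, so the Student's t kernel (as in deep embedded clustering) and k=8 below are assumptions.

```python
import torch

def select_top_k(instance_feats, classifier, k=8):
    """Score the embeddings g_ij of one slide with a binary classifier
    (e.g., nn.Linear(D, 1)) and keep the k most positive instances."""
    probs = torch.sigmoid(classifier(instance_feats)).squeeze(-1)  # (n,)
    idx = probs.topk(min(k, probs.numel())).indices
    return instance_feats[idx], probs[idx]

def soft_assign(z_i, centroids, alpha=1.0):
    """Assign the bag-level embedding z_i (D,) to learned centroids (C, D)
    with a similarity kernel; returns a distribution over the C centroids."""
    d2 = ((z_i.unsqueeze(0) - centroids) ** 2).sum(dim=1)  # (C,)
    q = (1.0 + d2 / alpha).pow(-(alpha + 1) / 2)           # Student's t kernel
    return q / q.sum()
```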

    DEVICE FOR IMAGE SYNTHESIS IN X-RAY AND METHOD THEREOF

    According to various embodiments of the present disclosure, a method of operating a device for X-ray image synthesis may comprise: receiving a camera image; extracting the centerline of a guidewire from the guidewire image contained in the camera image; combining the guidewire centerline with an X-ray image to generate a combined image; and generating a synthetic X-ray image by using the combined image to perform machine learning based on an X-ray synthetic-image model that takes a perceptual loss into account.
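
    A minimal sketch of the centerline extraction and combination steps, assuming a binary guidewire mask is already available and using scikit-image's skeletonize; the overlay scheme is an illustrative assumption, and the downstream perceptual-loss synthesis model is out of scope here.

```python
import numpy as np
from skimage.morphology import skeletonize

def guidewire_centerline(wire_mask):
    """Reduce a binary guidewire mask (H, W) to its one-pixel-wide centerline."""
    return skeletonize(wire_mask.astype(bool))

def combine(xray, wire_mask, value=1.0):
    """Overlay the centerline on the X-ray image (both float arrays in [0, 1]);
    the combined image conditions the X-ray synthesis model."""
    combined = xray.copy()
    combined[guidewire_centerline(wire_mask)] = value
    return combined
```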